Last time, you conducted Nanopore sequencing. Over the following three sessions we will learn:
How to use Orion and conduct genome analysis
Quality check, read filtering, mapping to the reference genome, and variant calling
How to interpret summary statistics of Nanopore sequence data
How to interpret variant data
Quality check -> Trimming of low-quality reads -> Quality check
Compare the overall read quality among the four conditions
Go to https://orion.nmbu.no/ at NMBU or with VPN.
In the Terminal/Command prompt, go to your directory. Review: the concept of the current directory
cd your_directory
Let’s make a directory for analysis and enter it.
mkdir pig_analysis # make the directory "pig_analysis"
cd pig_analysis # change the current directory to "pig_analysis"
Now, you will inspect the fastq file from your experiment, which contains Nanopore read information.
Review: inspecting file contents on the command line
for teachers: please_update_the_file_location_
zcat pigdata_fastq.gz | more
How a fastq file looks.
Each entry in a FASTQ file consists of 4 lines:
1. A sequence identifier line, beginning with "@".
2. The sequence (the base calls; A, C, T, G and N).
3. A separator, which is simply a plus (+) sign.
4. The base call quality scores. These are Phred+33 encoded, using ASCII characters to represent the numerical quality scores (see the quality score sheet).
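The Phred+33 encoding can be checked by hand: subtracting 33 from a quality character's ASCII code gives its quality score. A minimal shell sketch (the character 'I' is just an example):

```shell
# Decode one Phred+33 quality character: ASCII code minus 33 = quality score.
qchar='I'                       # example character from a quality line
ascii=$(printf '%d' "'$qchar")  # "'X" makes printf print X's ASCII code
qscore=$((ascii - 33))
echo "$qscore"                  # 'I' is ASCII 73, so this prints 40
```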
“zcat” -> decompress a gzipped file and print its contents
“wc” -> word count
“-l” -> count lines instead of words
zcat /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz | wc -l
Now you have the number of lines in the fastq file.
How many sequence reads are in the fastq file?
What is the quality of the first 5bp ? what is the quality of bp between XX and YY ? Why do you think they are different ?
We see that there are 96000 lines in the fastq file.
As we learned, each entry in a FASTQ file consists of 4 lines, so one read corresponds to four lines. This file therefore contains 96000/4 = 24000 reads.
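The arithmetic can be reproduced directly in the shell:

```shell
# Number of reads = number of lines / 4 (each FASTQ entry is 4 lines).
LINES=96000          # from: zcat ...fastq.gz | wc -l
READS=$((LINES / 4))
echo "$READS"        # prints 24000
```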
The original fastq files may contain low-quality reads. In this step, we will use “NanoPlot” to see the quality and length of each read.
“Singularity” is a container system on Orion used to execute software. A variety of bioinformatics tools are available through Singularity.
Make a Slurm script like the one below and run it.
Review: run a Slurm script with sbatch
#!/bin/bash
#SBATCH --job-name=Nanoplot # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END
##Activate conda environment
module load Miniconda3 && eval "$(conda shell.bash hook)"
### NB! Remember to use your own conda environment:
conda activate $SCRATCH/ToolBox/EUKVariantDetection
echo "Working with this $CONDA_PREFIX environment ..."
NanoPlot -t 8 --fastq /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz --plots dot --no_supplementary --no_static --N50 -p before
NanoPlot will generate the result files, with names prefixed “before”. Let’s look at them.
# taking too long?
qlogin
cp /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/beforeNanoPlot-report.html beforeNanoPlot-report.html
Open “beforeNanoPlot-report.html” on your local computer
Everything you need in case the scripts do not work well (for teachers: please specify the location):
ls /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data
# use cp command to copy files
# or run the full slurm script
sbatch /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/Bio326_2023_full.slurm
Filter low quality reads and short reads
Map the reads to the reference genome
Detect variants
for teachers: please replace singularity with conda
#!/bin/bash
#SBATCH --job-name=NanoFilt # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G
#SBATCH --ntasks=1
#SBATCH --mail-type=END
gunzip -c /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz | singularity exec /cvmfs/singularity.galaxyproject.org/all/nanofilt:2.8.0--py_0 NanoFilt -q 10 -l 500 | gzip > cleaned.pig.fastq.gz
-l : filter on a minimum read length
-q : filter on a minimum average read quality score
In this case, we are removing reads lower than quality score 10 and shorter than 500 bases.
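To see what such filtering does, here is a toy sketch (not NanoFilt itself) that applies only the minimum-length criterion to a two-read FASTQ stream, keeping reads of at least 5 bases:

```shell
# Toy length filter: keep FASTQ entries whose sequence is >= 5 bases.
# (NanoFilt also filters on mean quality; this sketch shows length only.)
printf '@r1\nACGTACGT\n+\nIIIIIIII\n@r2\nACG\n+\nIII\n' |
awk 'NR%4==1{h=$0} NR%4==2{s=$0} NR%4==3{p=$0}
     NR%4==0{if (length(s) >= 5) print h"\n"s"\n"p"\n"$0}'
```

Only the 8-base read @r1 survives; the 3-base read @r2 is dropped, just as NanoFilt drops reads shorter than the -l threshold.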
Run NanoPlot again on the cleaned sequences.
#!/bin/bash
#SBATCH --job-name=Nanoplot # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END
singularity exec /cvmfs/singularity.galaxyproject.org/all/nanoplot:1.41.0--pyhdfd78af_0 NanoPlot -t 8 --fastq cleaned.pig.fastq.gz --N50 --no_supplementary --no_static --plots dot -p after
Open “afterNanoPlot-report.html” on your local computer.
# taking too long?
qlogin
cp /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/afterNanoPlot-report.html afterNanoPlot-report.html
Do you see the difference in the read length and quality distributions before and after filtering?
for teachers: please make the quality check results file for all experiments in a shared directory and specify the location
In case Singularity does not work … use conda
for teachers: please replace the bull ref. genome (ver.2023) with the pig genome
for teachers: please make four input fastq files by merging the fastq files from the same condition (Vortex, Needle, Freeze and Ctrl); cleaned.pig.fastq.gz should be replaced by these four files
#!/bin/bash
#SBATCH --job-name=Minimap2 # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END
singularity exec /cvmfs/singularity.galaxyproject.org/all/minimap2:2.24--h7132678_1 minimap2 -t 8 -a /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/Bos_taurus.fa.gz cleaned.pig.fastq.gz > pig.sam
# updated (the reference Bos taurus fasta location)
# convert the sam file to bam format
singularity exec /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools view -S -b pig.sam > pig0.bam
## sort the bam file
singularity exec /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools sort pig0.bam -o pig.bam
# index the bam file
singularity exec /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools index -M pig.bam
# Variant Calling using Sniffles
singularity exec /cvmfs/singularity.galaxyproject.org/all/sniffles:2.0.7--pyhdfd78af_0 sniffles --input pig.bam --vcf pig.vcf
# taking too long?
qlogin
ls /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/
# and copy the file you need (the final product is .vcf file)
Now you have the variant file!
# INFO field
grep '^##' pig.vcf | tail -n 20
# variants
grep -v '^##' pig.vcf | more
Important fields
1 16849578 : chromosome and position of the variant
SVTYPE=DEL;SVLEN=-60 : type and size of the variant
0/1 : genotype
(you can open a vcf file in Notepad, Excel, etc.)
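These fields can also be pulled apart on the command line. A small sketch using a hypothetical, tab-separated VCF data line (in a VCF, the INFO column is field 8):

```shell
# Hypothetical VCF data line; real columns are:
# CHROM POS ID REF ALT QUAL FILTER INFO FORMAT SAMPLE
line=$(printf '1\t16849578\tDEL.1\tN\t<DEL>\t60\tPASS\tSVTYPE=DEL;SVLEN=-60\tGT\t0/1')
info=$(printf '%s\n' "$line" | cut -f8)  # extract the INFO column
printf '%s\n' "$info" | tr ';' '\n'      # one key=value pair per line
```

This prints SVTYPE=DEL and SVLEN=-60 on separate lines, making the type and size of the variant easy to read.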
Now you have variants! Let’s see which genes are affected by them.
First, we will select a random variant to investigate.
#Check the number of variants in the file (count the non-header lines;
# bcftools index -n would require a bgzip-compressed, indexed VCF)
NBVAR=$(grep -vc '^#' pig.vcf)
## sample a random number
RANDOMVAR=$(echo $((RANDOM % $NBVAR + 1)))
## let's check the variant sampled
bcftools view -H pig.vcf | sed -n ${RANDOMVAR}p
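A quick sanity check of the sampling formula: $RANDOM modulo the variant count, plus one, always yields an index between 1 and the count, so sed -n ${RANDOMVAR}p always hits a real line (428 below is just an illustrative count):

```shell
# RANDOM % N + 1 is always in the range 1..N.
NBVAR=428                            # illustrative variant count
RANDOMVAR=$(( RANDOM % NBVAR + 1 ))
[ "$RANDOMVAR" -ge 1 ] && [ "$RANDOMVAR" -le "$NBVAR" ] && echo "in range"
```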
Go to VEP (Variant Effect Predictor)
The Variant Effect Predictor tells us where in the genome the discovered variants are located (genic, regulatory, etc.).
Select “cow” as the reference species.
Upload: pig.vcf - downloaded from Orion or the section above as the file to investigate.
There are 428 variants; 88 genes are affected by these variants.
What are the most affected genes?
Click “Filters” and set “Impact is HIGH” to select highly impact variants.
There are some frameshift/transcript ablation variants.
Let’s closely investigate your variant!
Find your variant by downloading the .txt file.